Identifying Ad Hoc Queries
One problem that can plague a
production system is the execution of ad hoc queries against the
production database. If you want to identify ad hoc queries, the
applications that issue them, and the users who are running them, SQL
Profiler is your tool. You can create a trace as follows:
1. Create a new trace, using the SQLProfilerStandard template.
2. Add a new ApplicationName filter with Like Microsoft%.
When this trace is run,
you can identify database access that is happening via SSMS or
Microsoft Access. The user, the duration, and the actual SQL statement
are captured. An alternative is to change the ApplicationName
filter to capture all applications whose names do not match your
production applications, for example, Not Like MyOrderEntryApp%.
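The same trace can also be defined server side with the sp_trace_* system procedures, which avoids the overhead of the Profiler GUI on a busy server. The following is a minimal sketch; the trace file path is a hypothetical example, and only the SQL:BatchCompleted event (ID 12) with a few key columns is captured:

```sql
-- Hypothetical output path; SQL Server appends the .trc extension.
DECLARE @traceid INT
DECLARE @maxsize BIGINT
SET @maxsize = 50  -- maximum trace file size, in MB

EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\traces\adhoc', @maxsize, NULL

-- Capture SQL:BatchCompleted (event 12) with TextData (column 1),
-- ApplicationName (10), LoginName (11), and Duration (13)
DECLARE @on BIT
SET @on = 1
EXEC sp_trace_setevent @traceid, 12, 1,  @on
EXEC sp_trace_setevent @traceid, 12, 10, @on
EXEC sp_trace_setevent @traceid, 12, 11, @on
EXEC sp_trace_setevent @traceid, 12, 13, @on

-- Filter: ApplicationName (column 10) LIKE N'Microsoft%'
-- (logical operator 0 = AND, comparison operator 6 = LIKE)
EXEC sp_trace_setfilter @traceid, 10, 0, 6, N'Microsoft%'

-- Start the trace
EXEC sp_trace_setstatus @traceid, 1
```

Stopping and closing the trace later is done with sp_trace_setstatus and statuses 0 and 2, respectively.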
Identifying Performance Bottlenecks
Another common problem
with database applications is identifying performance bottlenecks. For
example, say that an application is running slow, but you’re not sure
why. You tested all the SQL statements and stored procedures used by the
application, and they were relatively fast. Yet you find that some of
the application screens are slow. Is it the database server? Is it the
client machine? Is it the network? These are all good questions, but
what is the answer? SQL Profiler can help you find out.
You can start with the same trace definition used in the preceding section. For this scenario, you need to specify an ApplicationName filter with the name of the application you want to trace. You might also want to apply a filter to a specific NTUserName to further refine your trace and avoid gathering trace information for users other than the one that you have isolated.
After you start your trace, you
use the slow-running application’s screens. You need to look at the
trace output and take note of the duration of the statements as they
execute on the database server. Are they relatively fast? How much time
was spent on the execution of the SQL statements and stored procedures
relative to the response time of the application screen? If the total
database duration is 1,000 milliseconds (1 second), and the screen takes
10 seconds to refresh, you need to examine other factors, such as the
network or the application code.
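If you save the trace to a file, this comparison can be done with a query instead of by eyeballing the grid. The following sketch totals the captured durations by application, assuming a hypothetical trace file path; note that in trace files the Duration column is stored in microseconds, even though Profiler displays milliseconds:

```sql
-- Hypothetical saved-trace path; Duration is in microseconds on disk.
SELECT ApplicationName,
       COUNT(*)             AS Statements,
       SUM(Duration) / 1000 AS TotalDurationMs
FROM sys.fn_trace_gettable(N'C:\traces\slowapp.trc', DEFAULT)
WHERE Duration IS NOT NULL
GROUP BY ApplicationName
ORDER BY TotalDurationMs DESC;
```

If the total database time is a small fraction of the screen's response time, the bottleneck lies outside SQL Server.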
With SQL Server 2008,
you can also combine Windows System Monitor (Perfmon) output with trace
output to identify performance bottlenecks. This feature helps unite
system-level metrics (for example, CPU utilization, memory usage) with
SQL Server performance metrics. The result is a very impressive display
that is synchronized based on time so that a correlation can be made
between system-level spikes and the related SQL Server statements.
To try out this powerful
new feature, you open the Perfmon application and add a new performance
counter log. For simplicity, you can just add one counter, such as % Processor Time.
Then you choose the option to manually start the log and click OK. Now,
you want to apply some kind of load to the SQL Server system. The
following script does index maintenance on two tables in the AdventureWorks2008 database and can be used to apply a sample load:
USE [AdventureWorks2008]
GO
ALTER INDEX [PK_SalesOrderDetail_SalesOrderID_SalesOrderDetailID]
ON [Sales].[SalesOrderDetail]
REORGANIZE WITH ( LOB_COMPACTION = ON )
GO
PRINT 'FIRST INDEX IS REORGANIZED'
WAITFOR DELAY '00:00:05'
USE [AdventureWorks2008]
GO
ALTER INDEX [PK_Person_BusinessEntityID]
ON [Person].[Person] REBUILD WITH
( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON,
SORT_IN_TEMPDB = OFF )
GO
PRINT 'SECOND INDEX IS REBUILT'
Next, you open the script in SSMS, but you don’t run it yet. You open SQL Profiler and create a trace by using the Standard Profiler template. This template captures basic SQL Server activity and also includes the StartTime and EndTime
columns that are necessary to correlate with the Perfmon counters. Now
you are ready to start the performance log and the SQL Server Profiler
trace. When they are running, you can run the sample load script. When
the script has completed, you stop the performance log and Profiler
trace. You save the Profiler trace to a file and then open the file in
the Profiler application.
The correlation of the
Perfmon log to the trace output file is accomplished from within the
Profiler application. To do this, you select File, Import Performance
Data. Then you select the performance log file that was just created;
these files are located by default in the c:\perflogs
folder. After you import the performance data, a new performance graph
and associated grid with the performance counters is displayed in the
Profiler, as shown in Figure 2.
Now
the fun begins! If you click one of the statements captured in the
Profiler grid, a vertical red line appears in the Perfmon graph that
reflects the time at which the statement was run. Conversely, if you
click a location in the graph, the corresponding SQL statement that was
run at that time is highlighted in the grid. If you see a spike in CPU
in the Perfmon graph, you can click the spike in the graph and find the
statement that may have caused the spike. This capability can help you
quickly and efficiently identify bottlenecks and the processes
contributing to them.
Monitoring Auto-Update Statistics
In some environments,
excessive auto-updating of statistics can affect system performance
while the statistics are being updated. SQL Profiler can be used to
monitor auto-updating of statistics as well as automatic statistics
creation.
To monitor auto-updating of statistics, you create a trace and include the AutoStats event from the Performance event category. Then you select the TextData, IntegerData, Success, and ObjectID columns. When the AutoStats event is captured, the IntegerData column contains the number of statistics updated for a given table, ObjectID is the ID of the table, and the TextData column contains the names of the columns, each with either an Updated: or Created: prefix. The Success column indicates whether the statistics operation succeeded or failed.
If you see an excessive number of AutoStats
events on a table or index, and the duration is high, it could be
affecting system performance. You might want to consider disabling
auto-update for statistics on that table and schedule statistics to be
updated periodically during nonpeak periods. You may also want to
utilize the AUTO_UPDATE_STATISTICS_ASYNC
database setting, which allows queries that utilize affected statistics
to compile without having to wait for the update of statistics to
complete.
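Both remedies mentioned above are simple T-SQL statements. The following sketch enables asynchronous statistics updates for the AdventureWorks2008 database and disables auto-update for one table (Sales.SalesOrderDetail is used here only as an example):

```sql
-- Let queries compile against existing statistics while the
-- update runs in the background (database-level setting).
ALTER DATABASE AdventureWorks2008
    SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Disable automatic statistics updates for all statistics
-- on a single table; reschedule updates via a maintenance job.
EXEC sp_autostats 'Sales.SalesOrderDetail', 'OFF';
```

Remember that any statistics disabled this way must then be maintained with UPDATE STATISTICS during off-peak windows.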
Monitoring Application Progress
The 10 user-configurable
events can be used in a variety of ways, including for tracking the
progress of an application or procedure. For instance, perhaps you have a
complex procedure that is subject to lengthy execution. You can add
debugging logic in this procedure to allow for real-time benchmarking
via SQL Profiler.
The key to this type of profiling is the sp_trace_generateevent stored procedure, which enables you to fire a user-configurable event. The procedure must reference one of the user-configurable event IDs (82 to 91), which correspond to the UserConfigurable events 0 to 9. If you execute the procedure with @eventid = 82, the UserConfigurable:0 event captures it.
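Before embedding the call in a procedure, you can test it in isolation. This minimal sketch fires UserConfigurable:0 (the message text is arbitrary):

```sql
-- Fire UserConfigurable:0 (event ID 82); a running trace that
-- includes the UserConfigurable:0 event will capture the message.
EXEC sp_trace_generateevent
     @eventid  = 82,
     @userinfo = N'Checkpoint reached';
```

The @userinfo text appears in the TextData column of the trace.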
Listing 1 contains a sample stored procedure that (in debug mode) triggers the trace events that SQL Profiler can capture.
Listing 1. A Stored Procedure That Raises User configurable Events for SQL Profiler
CREATE PROCEDURE SampleApplicationProc (@debug bit = 0)
AS
DECLARE @userinfoParm nvarchar(128)
SELECT @userinfoParm = getdate()

--If in debug mode, then launch event for Profiler
-- indicating start of application proc
IF @debug = 1
BEGIN
    SET @userinfoParm = 'Proc Start: '
        + convert(varchar(30), getdate(), 120)
    EXEC sp_trace_generateevent @eventid = 83,
         @userinfo = @userinfoParm
END

--Real world would have complex proc code executing here
--The WAITFOR statement was added to simulate processing time
WAITFOR DELAY '00:00:05'

--If in debug mode, then launch event indicating next significant stage
IF @debug = 1
BEGIN
    SET @userinfoParm = 'Proc Stage One Complete: '
        + convert(varchar(30), getdate(), 120)
    EXEC sp_trace_generateevent @eventid = 83,
         @userinfo = @userinfoParm
END

--Real world would have more complex proc code executing here
--The WAITFOR statement was added to simulate processing time
WAITFOR DELAY '00:00:05' -- 5-second delay

--If in debug mode, then launch event indicating next significant stage
IF @debug = 1
BEGIN
    SET @userinfoParm = 'Proc Stage Two Complete: '
        + convert(varchar(30), getdate(), 120)
    EXEC sp_trace_generateevent @eventid = 83,
         @userinfo = @userinfoParm
END

--You get the idea
GO
Now you need to set up a new trace that includes the UserConfigurable:1 event. To do so, you choose the TextData data column to capture the User configurable
output and any other data columns that make sense for your specific
trace. After this task is complete, you can launch the sample stored
procedure from Listing 1
and get progress information via SQL Profiler as the procedure
executes. You can accumulate execution statistics over time with this
kind of trace and summarize the results. The execution command for the
procedure follows:
EXEC SampleApplicationProc @debug = 1
The resulting SQL Profiler results are shown in Figure 3.
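If you save the trace to a file, the captured progress messages can also be retrieved with T-SQL and summarized over many executions. A sketch, assuming a hypothetical trace file path:

```sql
-- EventClass 83 corresponds to the UserConfigurable:1 event;
-- the path below is a hypothetical example.
SELECT StartTime, TextData
FROM sys.fn_trace_gettable(N'C:\traces\procprogress.trc', DEFAULT)
WHERE EventClass = 83
ORDER BY StartTime;
```

Differencing the StartTime values between consecutive rows gives the elapsed time for each stage of the procedure.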
There are many other applications for User configurable
events. How you use them depends on your specific need. As is the case
with many Profiler scenarios, there are seemingly endless possibilities.